Critical race theory


On the Origins of Bias in NLP through the Lens of the Jim Code

Elsafoury, Fatma, Abercrombie, Gavin

arXiv.org Artificial Intelligence

In this paper, we trace the biases in current natural language processing (NLP) models back to their origins in racism, sexism, and homophobia over the last 500 years. We review literature from critical race theory, gender studies, data ethics, and digital humanities studies, and summarize the origins of bias in NLP models from these social science perspectives. We show how the causes of the biases in the NLP pipeline are rooted in social issues. Finally, we argue that the only way to fix the bias and unfairness in NLP is by addressing the social problems that caused them in the first place and by incorporating social sciences and social scientists in efforts to mitigate bias in NLP models. We provide actionable recommendations for the NLP research community to do so.


'Algorithmic Reparation' Calls for Racial Justice in AI

WIRED

Forms of automation such as artificial intelligence increasingly inform decisions about who gets hired, is arrested, or receives health care. Examples from around the world show that the technology can be used to exclude, control, or oppress people and reinforce historic systems of inequality that predate AI. Now teams of sociologists and computer science researchers say the builders and deployers of AI models should consider race more explicitly, by leaning on concepts such as critical race theory and intersectionality. Critical race theory is a method of examining the impact of race and power, first developed by legal scholars in the 1970s, that grew into an intellectual movement influencing fields including education, ethnic studies, and sociology. Intersectionality acknowledges that people from different backgrounds experience the world in different ways based on their race, gender, class, or other forms of identity.


New digital tools to track illegal wildlife trade online

AIHub

Pangolins, also known as scaly anteaters, are currently the most trafficked mammal species. Criminals can be resourceful and unrelenting in their efforts to find a way around obstacles. Wildlife traffickers are no exception. Today's trade in wildlife and wildlife products has shifted from physical markets to online marketplaces where traffickers apply e-commerce business models and use encrypted messages in an attempt to evade detection by law enforcement. While the move towards online platforms started several years before the Covid-19 pandemic, the restrictions imposed to contain the virus accelerated this digital transformation.


The Absurd Idea to Put Bodycams on Teachers Is ... Feasible?

#artificialintelligence

In the realm of international cybersecurity, "dual use" technologies are capable of both affirming and eroding human rights. Facial recognition may identify a missing child, or make anonymity impossible. Hacking may save lives by revealing key intel on a terrorist attack, or empower dictators to identify and imprison political dissidents. The same is true for gadgets. Your smart speaker makes it easier to order pizza and listen to music, but also helps tech giants track you even more intimately and target you with more ads.


Google Artificial Intelligence Team Draws From Critical Race Theory, Internal Document Shows

#artificialintelligence

Google's artificial intelligence (AI) work draws from Critical Race Theory, a philosophical framework that posits that nearly every interaction should be seen as a racial power struggle and seeks to "disrupt" American society which it views as immutably racist, according to a company document obtained by The Daily Wire. A screenshot of an internal company page, obtained by The Daily Wire, says under the header "Ethical AI": We focus on AI at the intersection of Machine Learning and society, developing projects that inform the general public; bringing the complexities of individual identity into the development of human-centric AI; and creating ways to measure different kinds of biases and stereotypes. Out [sic] work includes lessons from gender studies, critical race theory, computational linguistics, computer vision, engineering education, and beyond! Google's Ethical AI team appears intent on encoding far-left ideology into its algorithms even after previous leaders of the team plunged the section into chaos over their insistence on overlaying progressive politics onto mathematics. Until recently, the team was co-led by Timnit Gebru, who cofounded a "Black in AI" racial affinity group and in 2018 coauthored a paper saying facial recognition technology was less accurate at recognizing women and minorities.


The Whiteness of AI

#artificialintelligence

It is a truth little acknowledged that a machine in possession of intelligence must be white. Typing terms like "robot" or "artificial intelligence" into a search engine will yield a preponderance of stock images of white plastic humanoids. Perhaps more notable still, these machines are not only white in colour, but the more human they are made to look, the more their features are made ethnically White. In this paper, we problematize the often unnoticed and unremarked-upon fact that intelligent machines are predominantly conceived and portrayed as White. We argue that this Whiteness both illuminates particularities of what (Anglophone Western) society hopes for and fears from these machines, and situates these affects within long-standing ideological structures that relate race and technology. Race and technology are two of the most powerful and important categories for understanding the world as it has developed since at least the early modern period.